
    Cyberinfrastructure for Scalable Access to Stream Flow Analysis

    Traditionally, the various components of flow analysis, including flooding, drought, base flow, pollutant loading, and duration curves, have been examined independently with separate analysis methods or software packages. A better approach is to combine these packages into a single web tool to improve access. Infrastructure-as-a-Service (IaaS) clouds provide scalable infrastructure for model implementation, a necessity for web services given the variable characteristics of web traffic. IaaS shifts the computational burden and overhead of multiple model runs from local computers to online servers. This paper demonstrates the scalability benefits of the Comprehensive Flow Analysis (CFA) tool in an IaaS environment. The CFA tool is available through the Environmental Risk Assessment Management System (eRAMS) website. eRAMS facilitates GIS data manipulation, visualization, and preparation of input information for models like CFA, and uses the Cloud Services Innovation Platform (CSIP) to request runs of the analyses within CFA. CSIP is an IaaS cloud modeling framework designed for executing various environmental models. This paper summarizes a scalability analysis of the analysis methods within CFA using CSIP in a cloud server environment.
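
    A minimal sketch of what a client request to a CSIP-style model service could look like; the endpoint URL, payload structure, and parameter names below are illustrative assumptions, not taken from the CFA or CSIP documentation.

    ```python
    # Hypothetical sketch: submitting a flow-analysis run to a CSIP-style
    # JSON web service. Endpoint path and payload fields are assumptions
    # for illustration only.
    import requests

    CSIP_ENDPOINT = "https://csip.example.org/csip-cfa/d/flowanalysis/1.0"  # placeholder URL

    payload = {
        "metainfo": {"mode": "async"},                        # assumed: run asynchronously
        "parameter": [
            {"name": "station_id", "value": "06752260"},      # example stream gauge ID
            {"name": "analysis", "value": "flow_duration"},   # assumed analysis name
        ],
    }

    resp = requests.post(CSIP_ENDPOINT, json=payload, timeout=60)
    resp.raise_for_status()
    print(resp.json())  # service response: results or a handle for polling the job
    ```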

    Dynamic Scaling for Service Oriented Applications: Implications of Virtual Machine Placement on IaaS Clouds

    Abstraction of physical hardware using infrastructure-as-a-service (IaaS) clouds leads to the simplistic view that resources are homogeneous and that infinite scaling is possible with linear increases in performance. Support for autonomic scaling of multi-tier service oriented applications requires determination of when, what, and where to scale. 'When' is addressed by hotspot detection schemes using techniques including performance modeling and time series analysis. 'What' relates to determining the quantity and size of new resources to provision. 'Where' involves identification of the best location(s) to provision new resources. In this paper we investigate primarily 'where' new infrastructure should be provisioned, and secondly 'what' the infrastructure should be. Dynamic scaling of infrastructure for service oriented applications requires rapid response to changes in demand to meet application quality-of-service requirements. We investigate the performance and resource cost implications of VM placement when dynamically scaling server infrastructure of service oriented applications. We evaluate dynamic scaling in the context of providing modeling-as-a-service for two environmental science models.
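
    As a rough illustration of the 'when' question, the sketch below implements a simple threshold-based hotspot detector over a rolling window of CPU utilization samples; the window size and thresholds are assumed values, not those evaluated in the paper.

    ```python
    # Minimal sketch of a threshold-based hotspot detector ("when" to scale),
    # assuming a rolling window of per-interval CPU utilization samples.
    from collections import deque

    class HotspotDetector:
        def __init__(self, window=5, scale_up=0.75, scale_down=0.30):
            self.samples = deque(maxlen=window)   # most recent utilization samples
            self.scale_up = scale_up              # assumed upper threshold
            self.scale_down = scale_down          # assumed lower threshold

        def observe(self, cpu_util):
            """Record one utilization sample (0.0-1.0) and return a scaling decision."""
            self.samples.append(cpu_util)
            if len(self.samples) < self.samples.maxlen:
                return "hold"                     # not enough history yet
            avg = sum(self.samples) / len(self.samples)
            if avg >= self.scale_up:
                return "scale_out"                # provision new VM(s)
            if avg <= self.scale_down:
                return "scale_in"                 # retire surplus VM(s)
            return "hold"

    detector = HotspotDetector()
    for u in [0.42, 0.65, 0.81, 0.88, 0.92, 0.95]:
        print(u, detector.observe(u))
    ```

    In practice, a "scale_out" decision would then feed the 'what' and 'where' steps discussed above.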

    Service Isolation vs. Consolidation: Implications for IaaS Cloud Application Deployment

    Service isolation, achieved by deploying components of multi-tier applications using separate virtual machines (VMs), is a common 'best' practice. Various advantages cited include simpler deployment architectures, easier resource scalability for supporting dynamic application throughput requirements, and support for component-level fault tolerance. This paper presents results from an empirical study which investigates the performance implications of component placement for deployments of multi-tier applications to Infrastructure-as-a-Service (IaaS) clouds. Relationships between performance and resource utilization (CPU, disk, network) are investigated to better understand the implications which result from how applications are deployed. All possible deployments for two variants of a multi-tier application were tested, one computationally bound by the model, the other bound by a geospatial database. The best performing deployments required as few as 2 VMs, half the number required for service isolation, demonstrating potential cost savings with service consolidation. Resource use (CPU time, disk I/O, and network I/O) varied based on component placement and VM memory allocation. Using separate VMs to host each application component resulted in performance overhead of ~1-2%. Relationships between resource utilization and performance were harnessed to build a multiple linear regression model to predict performance of component deployments. CPU time, disk sector reads, and disk sector writes are identified as the most powerful performance predictors for component deployments.
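
    The kind of multiple linear regression described here can be sketched as follows; the training data and the use of scikit-learn are illustrative assumptions rather than the study's actual dataset or tooling.

    ```python
    # Illustrative sketch: predict deployment execution time from CPU time,
    # disk sector reads, and disk sector writes. All numbers are made up.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    # Rows: observed deployments; columns: CPU time (s), sector reads, sector writes
    X = np.array([
        [41.2, 120_000,  80_000],
        [38.7,  95_000,  60_000],
        [55.1, 210_000, 150_000],
        [47.9, 160_000, 110_000],
    ])
    y = np.array([62.0, 57.5, 84.3, 71.8])   # observed execution times (s)

    model = LinearRegression().fit(X, y)
    print("R^2 on training data:", model.score(X, y))
    print("Predicted time:", model.predict([[50.0, 180_000, 120_000]])[0])
    ```

    A fitted model of this form can then be used to compare candidate component placements before deploying them.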

    Performance Modeling to Support Multi-Tier Application Deployment to Infrastructure-As-A-Service Clouds

    Infrastructure-as-a-service (IaaS) clouds support migration of multi-tier applications through virtualization of diverse application stack(s) of components which may require various operating systems and environments. To maximize performance of applications deployed to IaaS clouds while minimizing deployment costs, it is necessary to create virtual machine images to host application components with consideration for component dependencies that may affect load balancing of physical resources of VM hosts, including CPU time, disk, and network bandwidth. This paper presents results of an investigation utilizing physical machine (PM) and virtual machine (VM) resource utilization statistics to build performance models to predict application performance and rank performance of application component deployment configurations deployed across VMs. Our objective was to predict which component compositions provide the best performance while requiring the fewest number of VMs. Eighteen individual resource utilization statistics were investigated for use as independent variables to predict service execution time using four different modeling approaches. Overall, CPU time was the strongest predictor of execution time. The strength of individual predictors varied with respect to the resource utilization profiles of the applications. CPU statistics including idle time and number of context switches were good predictors when the test application was more disk I/O bound, while disk I/O statistics were better predictors when the application was more CPU bound. All performance models built were effective at determining the best performing service composition deployments, validating the utility of our approach.
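
    To illustrate how individual resource utilization statistics might be screened as candidate predictors of execution time, the sketch below ranks a few statistics by the strength of their correlation with execution time; the statistic names echo those mentioned above, but the sample values are fabricated for demonstration.

    ```python
    # Sketch: rank candidate predictors of service execution time by the
    # magnitude of their Pearson correlation with execution time.
    import numpy as np

    # Fabricated measurements for five test runs.
    exec_time = np.array([62.0, 57.5, 84.3, 71.8, 66.4])          # execution time (s)
    stats = {
        "cpu_time":          np.array([41.2, 38.7, 55.1, 47.9, 43.6]),
        "cpu_idle":          np.array([18.0, 21.5, 6.2, 12.4, 16.1]),
        "context_switches":  np.array([9.1e5, 8.4e5, 1.4e6, 1.1e6, 9.8e5]),
        "disk_sector_reads": np.array([1.2e5, 9.5e4, 2.1e5, 1.6e5, 1.3e5]),
    }

    ranked = sorted(stats,
                    key=lambda name: abs(np.corrcoef(stats[name], exec_time)[0, 1]),
                    reverse=True)
    for name in ranked:
        r = np.corrcoef(stats[name], exec_time)[0, 1]
        print(f"{name:18s} r = {r:+.2f}")
    ```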

    Migration of Multi-Tier Applications to Infrastructure-As-A-Service Clouds: An Investigation Using Kernel-Based Virtual Machines

    To investigate challenges of multi-tier application migration to Infrastructure-as-a-Service (IaaS) clouds, we performed an experimental investigation by deploying processor-bound and input/output-bound variants of the RUSLE2 erosion model to an IaaS-based private cloud. Scaling the applications to achieve optimal system throughput is complex and involves much more than simply increasing the number of allotted virtual machines (VMs). While scaling the application variants, a series of bottlenecks were encountered unique to an application's processing, I/O, and memory requirements, herein referred to as an application's profile. To investigate the impact of provisioning variation for hosting multi-tier applications, we tested four schemes of VM deployments across the physical nodes of our cloud. Performance degradation was more pronounced when multiple I/O- or CPU-intensive application components were co-located on the same physical hardware. We investigated the virtualization overhead incurred using Kernel-based Virtual Machines (KVM) by deploying our application variants to both physical and virtual machines. Overhead varied based on the unique characteristics of each application's profile. We observed ~112% overhead for the input/output-bound application and just ~10% overhead for the processor-bound application. Understanding an application's profile was found to be important for optimal IaaS-based cloud migration and scaling.
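
    The overhead figures quoted above follow from a simple relative-slowdown calculation; a minimal sketch is shown below, with illustrative timings chosen only to reproduce the reported ~112% and ~10% values, not the paper's actual measurements.

    ```python
    # Minimal sketch of the implied overhead calculation: relative slowdown of
    # a virtualized run versus the same workload on physical hardware.
    def virtualization_overhead(t_physical, t_virtual):
        """Percent increase in execution time when running under virtualization."""
        return (t_virtual - t_physical) / t_physical * 100.0

    print(f"{virtualization_overhead(10.0, 21.2):.0f}% overhead (I/O-bound case)")
    print(f"{virtualization_overhead(10.0, 11.0):.0f}% overhead (CPU-bound case)")
    ```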

    The Virtual Machine (VM) Scaler: An Infrastructure Manager Supporting Environmental Modeling on IaaS Clouds

    Infrastructure-as-a-service (IaaS) clouds provide a new medium for deployment of environmental modeling applications. Harnessing advancements in virtualization, IaaS clouds can provide dynamic scalable infrastructure to better support scientific modeling computational demands. Providing scientific modeling as-a-service requires dynamic scaling of server infrastructure to adapt to changing user workloads. This paper presents the Virtual Machine (VM) Scaler, an autonomic resource manager for IaaS clouds. We have developed VM-Scaler, a REST/JSON-based web services application which provides infrastructure provisioning and management in support of scientific modeling for the Cloud Services Innovation Platform (CSIP) [Lloyd et al. 2012]. VM-Scaler harnesses the Amazon Elastic Compute Cloud (EC2) application programming interface to support model-service scalability, cloud management, and infrastructure configuration for modeling workloads. VM-Scaler provides cloud control while abstracting the underlying IaaS cloud from the end user. VM-Scaler is extensible to support any EC2-compatible cloud and currently supports the Amazon public cloud and Eucalyptus private cloud versions 3.1 and 3.3. VM-Scaler provides a platform to improve scientific model deployment by supporting experimentation with hot spot detection schemes, VM management and placement approaches, and model job scheduling/proxy services.
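
    As a hedged illustration of the style of EC2-compatible provisioning call that an infrastructure manager like VM-Scaler builds on (not VM-Scaler's own code), the sketch below uses the boto3 client; the AMI ID, instance type, and region are placeholders.

    ```python
    # Illustrative sketch: launch worker VMs through an EC2-compatible API.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    response = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder model-server image
        InstanceType="m1.large",           # placeholder instance type
        MinCount=1,
        MaxCount=2,                        # scale out by up to two workers
    )
    for instance in response["Instances"]:
        print(instance["InstanceId"], instance["State"]["Name"])
    ```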

    Data Provisioning for the Object Modeling System (OMS)

    The Object Modeling System (OMS) platform supports initiatives to build or re-factor agro-environmental models and deploy them in different business contexts as model services on cloud computing platforms. Whether for traditional desktop, client-server, or emerging cloud deployments, success, especially at the enterprise level, relies on stable and efficient data provisioning to the models. In this paper we describe recent experience and trends with tools and services to supply data for model inputs. Solutions range from simple pre-processing tools to data services deployed to cloud platforms. Systematic, sustained data stewardship and alignment with standards organizations also impart stability to data provisioning efforts.

    Toward impact-based monitoring of drought and its cascading hazards

    Growth in satellite observations and modelling capabilities has transformed drought monitoring, offering near-real-time information. However, current monitoring efforts focus on hazards rather than impacts, and are further disconnected from drought-related compound or cascading hazards such as heatwaves, wildfires, floods and debris flows. In this Perspective, we advocate for impact-based drought monitoring and integration with broader drought-related hazards. Impact-based monitoring will go beyond top-down hazard information, linking drought to physical or societal impacts such as crop yield, food availability, energy generation or unemployment. This approach, specifically forecasts of drought event impacts, would accordingly benefit multiple stakeholders involved in drought planning, and risk and response management, with clear benefits for food and water security. Yet adoption and implementation are hindered by the absence of consistent drought impact data, limited information on local factors affecting water availability (including water demand, transfer and withdrawal), and impact assessment models being disconnected from drought monitoring tools. Implementation of impact-based drought monitoring thus requires the use of newly available remote sensors, the availability of large volumes of standardized data across drought-related fields, and the adoption of artificial intelligence to extract and synthesize physical and societal drought impacts.